Synthesizing and Evaluating Animations of American Sign Language Verbs Modeled from Motion-Capture Data
Abstract
Animations of American Sign Language (ASL) can make information accessible for many signers with lower levels of English literacy. Automatically synthesizing such animations is challenging because the movements of ASL signs often depend on the context in which they appear, e.g., many ASL verb movements depend on locations in the signing space the signer has associated with the verb’s subject and object. This paper presents several techniques for automatically synthesizing novel instances of ASL verbs whose motion-path and hand-orientation must accurately reflect the subject and object locations in 3D space, including enhancements to prior state-of-the-art models. Using these models, animation generation software could produce an infinite variety of indicating verb instances. Using a corpus of motion-capture recordings of multiple performances of eight ASL indicating verbs, we modeled the signer’s hand locations and orientations during each verb, dependent upon the location in the signing space where the subject and object were positioned. In a user study, ASL signers watched animations that included verbs synthesized from these models, and we found that they had similar quality to those produced by a human animator.
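To illustrate the kind of modeling the abstract describes, the sketch below fits a simple affine model that predicts a hand's 3D position at a verb keyframe from the 3D locations assigned to the subject and object in the signing space. This is a hypothetical minimal example, not the authors' actual model; the function names and the least-squares formulation are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch (not the paper's actual model): learn a mapping from
# the subject/object locations in the signing space to the signer's hand
# position at one keyframe of an indicating verb, via ordinary least squares
# over motion-capture training examples.

def fit_hand_model(subjects, objects, hand_positions):
    """subjects, objects, hand_positions: (n, 3) arrays from recordings."""
    # Affine features: subject xyz, object xyz, and a constant bias term.
    X = np.hstack([subjects, objects, np.ones((len(subjects), 1))])
    coeffs, *_ = np.linalg.lstsq(X, hand_positions, rcond=None)
    return coeffs  # (7, 3) matrix mapping features -> hand position

def predict_hand(coeffs, subject, object_):
    x = np.concatenate([subject, object_, [1.0]])
    return x @ coeffs

# Toy training data in which the hand ends up midway between the two points.
rng = np.random.default_rng(0)
subs = rng.uniform(-1, 1, (50, 3))
objs = rng.uniform(-1, 1, (50, 3))
hands = 0.5 * (subs + objs)

coeffs = fit_hand_model(subs, objs, hands)
pred = predict_hand(coeffs, np.array([0.2, 0.1, 0.0]), np.array([0.6, 0.3, 0.2]))
print(np.round(pred, 3))  # ≈ the midpoint [0.4, 0.2, 0.1]
```

A real system would model the full motion path and hand orientation over time rather than a single point, but the same idea applies: parameterize the verb's articulation by the subject and object positions and fit that parameterization from corpus data.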
Similar Resources
Synthesizing American Sign Language Spatially Inflected Verbs from Motion-Capture Data
People who are deaf or hard-of-hearing who have lower levels of written-language literacy can benefit from computer-synthesized animations of sign language, which present information in a more accessible form. This paper introduces a novel method for modeling and synthesizing American Sign Language (ASL) animations based on motion-capture data collected from native signers. This technique allows...
Selecting Exemplar Recordings of American Sign Language Non-Manual Expressions for Animation Synthesis Based on Manual Sign Timing
Animations of sign language can increase the accessibility of information for people who are deaf or hard of hearing (DHH), but prior work has demonstrated that accurate non-manual expressions (NMEs), consisting of face and head movements, are necessary to produce linguistically accurate animations that are easy to understand. When synthesizing animation, given a sequence of signs performed on ...
Learning a Vector-Based Model of American Sign Language Inflecting Verbs from Motion-Capture Data
American Sign Language (ASL) synthesis software can improve the accessibility of information and services for deaf individuals with low English literacy. The synthesis component of current ASL animation generation and scripting systems have limited handling of the many ASL verb signs whose movement path is inflected to indicate 3D locations in the signing space associated with discourse referen...
Collecting and evaluating the CUNY ASL corpus for research on American Sign Language animation
While there is great potential for sign language animation generation software to improve the accessibility of information for deaf individuals with low written-language literacy, the understandability of current sign language animation systems is limited. Data-driven methodologies using annotated sign language corpora encoding detailed human movement have enabled some researchers to address se...
Learning to Generate Understandable Animations of American Sign Language
Standardized testing has revealed that many deaf adults in the U.S. have lower levels of written English literacy; providing American Sign Language (ASL) on websites can make information and services more accessible. Unfortunately, video recordings of human signers are difficult to update when information changes, and there is no way to support just-in-time generation of web content from a quer...
Publication year: 2015